The apparent similarity between natural language and biological sequences has led to a surge in the application of state-of-the-art deep language models (LMs) to the analysis of antibodies and other biological sequences. However, the lack of a rigorous linguistic formalization of biological sequence languages, which would define basic components such as a lexicon (i.e., the discrete units of the language) and a grammar (i.e., the rules that link sequence well-formedness, structure, and meaning), has led to mainly domain-unadapted applications of LMs that do not account for the underlying structure of the biological sequences studied. A linguistic formalization, on the other hand, establishes linguistically informed and thus domain-adapted components for LM applications. It would facilitate a better understanding of how differences and similarities between natural language and biological sequences influence the quality of LMs, which is crucial for designing interpretable models. Deciphering the rules of antibody specificity is crucial to accelerating rational and in silico biotherapeutic drug design. Here, we formalize the properties of the antibody language and thereby establish not only a foundation for the application of linguistic tools to adaptive immune receptor analysis but also for systematic immunolinguistic studies of immune receptor specificity.
Deep-neural-network-based language models (LMs) are increasingly applied to large-scale protein sequence data to predict protein function. However, being largely black-box models, current protein LM approaches do not foster a fundamental understanding of sequence-function mappings, hindering rule-based biotherapeutic drug development. We argue that guidance drawn from linguistics, a field specialized in the analytical extraction of rules from natural language data, can aid in building more interpretable protein LMs that are more likely to learn relevant domain-specific rules. Compared to natural language LMs, differences between protein sequence data and linguistic sequence data require the integration of more domain-specific knowledge into protein LMs. Here, we provide a linguistics-based roadmap for protein LM pipeline choices with regard to training data, tokenization, token embedding, sequence embedding, and model interpretation. Combining linguistics with protein LMs enables the development of next-generation interpretable machine learning models, with the potential to uncover the biological mechanisms underlying sequence-function relationships.
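As a toy illustration of one of the roadmap's pipeline choices, the sketch below implements a hypothetical overlapping k-mer tokenizer for protein sequences; the function names, the default k=3, and the special tokens are illustrative assumptions, not prescriptions from the paper:

```python
def kmer_tokenize(sequence: str, k: int = 3, stride: int = 1):
    """Split a protein sequence into overlapping k-mer tokens."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

def build_vocab(sequences, k: int = 3):
    """Map every k-mer seen in the corpus to an integer id, reserving special tokens."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for seq in sequences:
        for tok in kmer_tokenize(seq, k):
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(sequence, vocab, k: int = 3):
    """Turn a sequence into integer ids, falling back to <unk> for unseen k-mers."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in kmer_tokenize(sequence, k)]
```

Whether tokens should be residues, k-mers, or structure-informed units is exactly the kind of domain-specific decision the roadmap addresses; this sketch only fixes the mechanics of one option.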
The effective analysis of unusual domain-specific video collections is an important practical problem for which state-of-the-art general-purpose models still face limitations. Hence, it is desirable to design benchmark datasets that challenge novel, powerful models with the additional constraints of specific domains. It is important to remember that domain-specific data may be noisier (e.g., endoscopic or underwater videos) and often require more experienced users for effective search. In this paper, we focus on single-shot videos taken from moving cameras in underwater environments, which constitute a non-trivial challenge for research purposes. The first shard of the new Marine Video Kit dataset is presented, intended for video retrieval and other computer vision challenges. In addition to basic metadata statistics, we provide several insights and reference graphs based on low-level features as well as semantic annotations of selected keyframes. The analysis also contains experiments showing the limitations of respected general-purpose models for retrieval.
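To illustrate the kind of low-level keyframe features such an analysis might rely on, here is a minimal sketch of a per-channel color histogram descriptor with a histogram-intersection similarity; the bin count and similarity measure are illustrative choices, not the dataset's actual feature pipeline:

```python
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized per-channel color histogram of an RGB frame of shape (H, W, 3)."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1] between two normalized histograms (1 = identical)."""
    return float(np.minimum(h1, h2).sum())
```

Descriptors like this are cheap to compute over thousands of keyframes, which is why they are a common baseline for the reference statistics a dataset paper reports.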
Over the past few years, many explanation methods based on perturbations of the input data have been introduced to improve our understanding of the decisions made by black-box models. The goal of this work is to introduce a novel perturbation scheme so that more faithful and robust explanations can be obtained. Our study focuses on the impact of perturbation directions on the data topology. We show that perturbations along directions orthogonal to the input manifold better preserve the data topology, both in a worst-case analysis in terms of the discrete Gromov-Hausdorff distance and in an average-case analysis via persistent homology. From these results, we introduce the EMaP algorithm, which realizes the orthogonal perturbation scheme. Our experiments show that EMaP not only improves the explainers' performance but also helps them overcome a recent attack against perturbation-based methods.
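A minimal sketch of the orthogonal idea (not the full EMaP algorithm): estimate the local tangent space of the data manifold from a point's nearest neighbours via SVD, then perturb only along its orthogonal complement. All parameter names and defaults here are illustrative assumptions:

```python
import numpy as np

def orthogonal_perturbations(X, point_idx, n_neighbors=10, manifold_dim=2,
                             scale=0.1, n_samples=5, rng=None):
    """Perturb X[point_idx] along directions orthogonal to an SVD estimate
    of the local data manifold (a sketch of the orthogonal scheme only)."""
    rng = np.random.default_rng(rng)
    x = X[point_idx]
    # nearest neighbours by Euclidean distance (including the point itself)
    d = np.linalg.norm(X - x, axis=1)
    nbrs = X[np.argsort(d)[:n_neighbors]]
    centered = nbrs - nbrs.mean(axis=0)
    # right-singular vectors: the first manifold_dim span the tangent estimate
    _, _, vt = np.linalg.svd(centered, full_matrices=True)
    normal = vt[manifold_dim:]          # orthogonal complement of the tangent
    coeffs = rng.normal(scale=scale, size=(n_samples, normal.shape[0]))
    return x + coeffs @ normal
```

On data sampled from a plane in 3-D, such perturbations move points only off the plane, which is the topology-preserving behaviour the paper analyzes.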
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, partial trajectories for cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the trajectories obtained by the proposed method are compared with those of semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy for macrophage data under challenging conditions: feeble fluorescent intensity, irregular shapes, and motion of the macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
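Otsu's thresholding, one building block of the proposed segmentation, can be sketched in a few lines of NumPy. This is a generic global implementation for illustration; the paper applies it locally, after space-time filtering:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, n_bins: int = 256) -> float:
    """Return the intensity threshold that maximizes the between-class
    variance of foreground vs. background (Otsu's method)."""
    hist, edges = np.histogram(image, bins=n_bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(prob)                 # background weight at each cut
    w1 = 1.0 - w0                        # foreground weight
    mu = np.cumsum(prob * centers)       # cumulative mean intensity
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)   # empty classes get variance 0
    return float(centers[np.argmax(var_between)])
```

For a clearly bimodal intensity distribution, the returned threshold falls between the two modes and separates the classes.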
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using the Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust given a certain level of noise in the users' feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
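The attention idea, selecting the most significant lesion information by weighting region features, can be sketched generically as softmax attention pooling. The scoring vector below stands in for a learned layer; this is an illustration of the mechanism, not DRG-Net's actual architecture:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features: np.ndarray, w: np.ndarray):
    """Weight lesion-region feature vectors by relevance scores and pool them.
    features: (n_regions, dim); w: (dim,) scoring vector (stand-in for a learned layer)."""
    scores = features @ w          # one relevance score per region
    alpha = softmax(scores)        # attention weights, summing to 1
    pooled = alpha @ features      # weighted sum of region features
    return pooled, alpha
```

The weights `alpha` are also what makes such models partially explainable: high-weight regions indicate which lesions drove the grading decision.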
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have in turn led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgency to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. In this paper, we present our work on integrating multiple data sources in SmokeyNet, a deep learning model using spatio-temporal information to detect smoke from wildland fires. Camera image data is integrated with weather sensor measurements and processed by SmokeyNet to create a multimodal wildland fire smoke detection system. We present our results comparing performance in terms of both accuracy and time-to-detection for multimodal data vs. a single data source. With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
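One simple way to integrate the two modalities, concatenating standardized weather sensor readings onto an image feature vector, is sketched below; SmokeyNet's actual fusion may differ, and all names here are illustrative:

```python
import numpy as np

def fuse_multimodal(image_feats: np.ndarray, weather: np.ndarray,
                    weather_mean: np.ndarray, weather_std: np.ndarray) -> np.ndarray:
    """Fusion by concatenation: standardize weather sensor readings
    (e.g. temperature, humidity, wind speed) and append them to image features,
    so a downstream classifier sees both modalities on comparable scales."""
    weather_z = (weather - weather_mean) / weather_std
    return np.concatenate([image_feats, weather_z], axis=-1)
```

Standardizing the sensor channels matters because raw weather values live on very different scales than learned image features.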
Configurable software systems are employed in many important application domains. Understanding the performance of a system under all configurations is critical to prevent potential performance issues caused by misconfiguration. However, as the number of configurations can be prohibitively large, it is not possible to measure the system performance under every configuration. Thus, a common approach is to build a prediction model from limited measurement data to predict the performance of all configurations as scalar values. However, it has been pointed out that different sources of uncertainty, arising from the data collection or the modeling process, can make such scalar predictions unreliable. To address this problem, we propose a Bayesian deep learning based method, namely BDLPerf, that can incorporate uncertainty into the prediction model. BDLPerf provides both scalar predictions of configurations' performance and the corresponding confidence intervals for these scalar predictions. We also develop a novel uncertainty calibration technique to ensure the reliability of the confidence intervals generated by a Bayesian prediction model. Finally, we suggest an efficient hyperparameter tuning technique to train the prediction model within a reasonable amount of time whilst achieving high accuracy. Our experimental results on 10 real-world systems show that BDLPerf achieves higher accuracy than existing approaches in both scalar performance prediction and confidence interval estimation.
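A common way to obtain such scalar predictions with confidence intervals from a Bayesian model is to draw Monte Carlo samples of the prediction and summarize them. The sketch below shows only that generic summarization step, not BDLPerf's specific model or its calibration technique:

```python
import numpy as np

def predictive_interval(samples: np.ndarray, confidence: float = 0.95):
    """Summarize Monte Carlo prediction samples of shape (n_samples, n_configs)
    into a scalar prediction (the mean) and a central confidence interval
    per configuration."""
    alpha = (1.0 - confidence) / 2.0
    mean = samples.mean(axis=0)
    lower = np.quantile(samples, alpha, axis=0)
    upper = np.quantile(samples, 1.0 - alpha, axis=0)
    return mean, lower, upper
```

Whether those intervals actually cover the true performance at the nominal rate is exactly what a calibration technique, like the one the paper proposes, has to verify and correct.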
Self-Supervised Learning (SSL) is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars. In addition to a lack of labeled data, these applications also suffer from distributional shifts. Therefore, an SSL method should provide robust generalization and uncertainty estimation on the test dataset to be considered a reliable model in such high-stakes domains. However, existing approaches often focus on generalization without evaluating the model's uncertainty. The ability to compare SSL techniques on these estimates is therefore critical for research on the reliability of self-supervision models. In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context, Rotation, and Geometric Transformations Prediction for vision, as well as BERT and GPT for language tasks. We train the SSL objectives as auxiliary learning for vision and as pre-training for language models, then evaluate generalization (in- and out-of-distribution classification accuracy) and uncertainty (expected calibration error) across different covariate-shift datasets, including MNIST-C, CIFAR-10-C, CIFAR-10.1, and MNLI. Our goal is to create a benchmark from the outputs of these experiments, providing a starting point for new SSL methods in Reliable Machine Learning. All source code to reproduce the results is available at https://github.com/hamanhbui/reliable_ssl_baselines.
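The uncertainty metric used above, expected calibration error (ECE), can be computed with a short generic routine over equal-width confidence bins; this is an illustration of the standard definition rather than the benchmark's exact implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """ECE: average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)   # bins are (lo, hi]
        if not mask.any():
            continue
        acc = correct[mask].mean()      # empirical accuracy in this bin
        conf = confidences[mask].mean() # mean predicted confidence in this bin
        ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model (accuracy matches confidence in every bin) scores 0; a model that is always fully confident and always wrong scores 1.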
Real-time individual endpoint prediction has always been a challenging task, but one of great clinical utility for both patients and healthcare providers. With 6,879 chronic kidney disease stage 4 (CKD4) patients as a use case, we explored the feasibility and performance of gated recurrent units with decay that model a Weibull probability density function (GRU-D-Weibull) as a semi-parametric longitudinal model for real-time individual endpoint prediction. GRU-D-Weibull has a maximum C-index of 0.77 at 4.3 years of follow-up, compared to 0.68 achieved by competing models. The L1-loss of GRU-D-Weibull is ~66% of XGB(AFT), ~60% of MTLR, and ~30% of the AFT model at the CKD4 index date. The average absolute L1-loss of GRU-D-Weibull is around one year, with a minimum of 40% Parkes serious error after the index date. GRU-D-Weibull is not calibrated and significantly underestimates the true survival probability. Feature importance tests indicate that blood pressure becomes increasingly important during follow-up, while eGFR and blood albumin become less important. Most continuous features have a non-linear/parabolic impact on predicted survival time, and the results are generally consistent with existing knowledge. As a semi-parametric temporal model, GRU-D-Weibull offers built-in parameterization of missingness, native support for asynchronously arriving measurements, the ability to output both probability and point estimates at arbitrary time points for arbitrary prediction horizons, and improved discrimination and point-estimate accuracy after incorporating newly arrived data. Further research on its performance with more comprehensive input features, as well as on in-process or post-process calibration, is warranted to benefit CKD4 and similarly severely ill patients.
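Given a predicted Weibull shape and scale, both a survival probability at an arbitrary time point and a point estimate (the median survival time) follow in closed form; a generic sketch of those formulas (not the GRU-D-Weibull code itself):

```python
import math

def weibull_survival(t: float, shape: float, scale: float) -> float:
    """Survival probability S(t) = exp(-(t/scale)^shape) under a Weibull
    time-to-event model with shape k and scale lambda."""
    return math.exp(-((t / scale) ** shape))

def weibull_median(shape: float, scale: float) -> float:
    """Point estimate of time-to-event: the median, scale * ln(2)^(1/shape),
    i.e. the time t at which S(t) = 0.5."""
    return scale * math.log(2.0) ** (1.0 / shape)
```

This closed form is what gives the model its flexibility: one pair of predicted parameters yields probabilities and point estimates for any prediction horizon.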